66 research outputs found

    Smart objects as building blocks for the internet of things

    The combination of the Internet and emerging technologies such as near-field communication, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex applications.
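    The three levels of awareness the abstract names nest naturally, each adding behaviour on top of the previous one. The Python sketch below illustrates one way that hierarchy could be expressed; every class and method name is invented for illustration, and this is not the authors' implementation.

```python
# Hypothetical sketch of the activity-/policy-/process-aware hierarchy;
# all names are invented for illustration, not taken from the paper.

class ActivityAwareObject:
    """Level 1: recognizes what is happening to or around the object."""
    def on_sensor_event(self, event):
        return event.get("type", "unknown")  # e.g. "moved", "opened"

class PolicyAwareObject(ActivityAwareObject):
    """Level 2: additionally checks activities against usage policies."""
    def __init__(self, policies):
        self.policies = policies             # {activity: allowed?}

    def on_sensor_event(self, event):
        activity = super().on_sensor_event(event)
        if not self.policies.get(activity, True):
            print(f"policy violation: {activity}")
        return activity

class ProcessAwareObject(PolicyAwareObject):
    """Level 3: additionally relates activities to a business process."""
    def __init__(self, policies, process_steps):
        super().__init__(policies)
        self.process_steps = process_steps   # expected activity order
        self.step = 0

    def on_sensor_event(self, event):
        activity = super().on_sensor_event(event)
        if self.step < len(self.process_steps) and \
                activity == self.process_steps[self.step]:
            self.step += 1                   # advance in the process
        return activity

obj = ProcessAwareObject({"dropped": False}, ["packed", "shipped"])
obj.on_sensor_event({"type": "packed"})      # advances the process
obj.on_sensor_event({"type": "dropped"})     # prints a policy violation
```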

    Engagement-aware computing: Modelling user engagement from mobile contexts

    In this paper, we examine the potential of using mobile context to model user engagement. Taking an experimental approach, we systematically explore the dynamics of user engagement with a smartphone through three different studies. Specifically, to understand the feasibility of detecting user engagement from mobile context, we first conduct an EEG study with 10 users and observe a strong correlation between automatically detected engagement scores and users' subjective perception of engagement. Grounded in this result, we model a set of application-level features derived from the smartphone usage of 10 users to detect the engagement of a usage session using a Random Forest classifier. Finally, we apply this model to smartphone usage logs of 130 users and train an SVM classifier on a variety of contextual factors to predict user engagement, achieving an F1-score of 0.82. Our experimental results highlight the potential of mobile contexts in designing engagement-aware applications and provide guidance for future explorations.
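    The two modelling stages in the abstract (a Random Forest over usage-session features, then an SVM over contextual factors labelled by that model) follow a standard scikit-learn pattern. The sketch below uses random placeholder data and invented feature sets purely to show the shape of that pipeline; it is not the paper's code.

```python
# Sketch of the two-stage modelling pipeline described above, using
# random placeholder data and invented features; not the paper's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stage 1: Random Forest over per-session usage features
# (e.g. session length, number of apps used, touch events -- assumed).
X_usage = rng.random((200, 5))
y_engaged = rng.integers(0, 2, 200)          # 1 = engaged session
X_tr, X_te, y_tr, y_te = train_test_split(X_usage, y_engaged, test_size=0.3)
rf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("RF F1:", f1_score(y_te, rf.predict(X_te)))

# Stage 2: the session model labels new sessions; an SVM then learns to
# predict engagement from contextual factors (time of day, place, ...).
X_ctx = rng.random((500, 8))                 # contextual factors
y_pseudo = rf.predict(rng.random((500, 5)))  # labels from stage 1
svm = SVC(kernel="rbf").fit(X_ctx[:400], y_pseudo[:400])
print("SVM F1:", f1_score(y_pseudo[400:], svm.predict(X_ctx[400:])))
```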

    HeadScan: A Wearable System for Radio-Based Sensing of Head and Mouth-Related Activities

    The popularity of wearables continues to rise. However, possible applications, and even their raw functionality, are constrained by the types of sensors that are currently available. Accelerometers and gyroscopes struggle to capture complex user activities. Microphones and image sensors are more powerful but capture privacy-sensitive information. Physiological sensors are obtrusive to users, as they often require skin contact and must be placed at certain body positions to function. In contrast, radio-based sensing uses wireless radio signals to capture movements of different parts of the body, and therefore provides a contactless and privacy-preserving approach to detecting and monitoring human activities. In this paper, we contribute to the search for new sensing modalities for the next generation of wearable devices by exploring the feasibility of mobile radio-based human activity recognition. We believe radio-based sensing has the potential to fundamentally transform wearables as we currently know them. As the first step toward our vision, we have designed and developed HeadScan, a first-of-its-kind wearable for radio-based sensing of a number of human activities that involve head and mouth movements. HeadScan only requires a pair of small antennas placed on the shoulder and collar and one wearable unit worn on the arm or the belt of the user. HeadScan uses the fine-grained CSI measurements extracted from radio signals and incorporates a novel signal processing pipeline that converts the raw CSI measurements into the targeted human activities. To examine the feasibility and performance of HeadScan, we have collected approximately 50.5 hours of data from seven users. Our wide-ranging experiments include comparisons to a conventional skin-contact, audio-based sensing approach for tracking the same set of head and mouth-related activities. Our experimental results highlight the enormous potential of our radio-based mobile sensing approach and provide guidance to future explorations.
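    A CSI pipeline of the kind the abstract outlines typically denoises per-subcarrier amplitudes and reduces each time window to a feature vector for a classifier. The sketch below is a minimal, hypothetical rendering of that idea; all names are invented, and HeadScan's actual pipeline is more sophisticated.

```python
# Hypothetical CSI activity-recognition pipeline in the spirit described
# above: amplitude extraction -> denoising -> window features. All names
# are invented; the paper's pipeline is more sophisticated.
import numpy as np
from scipy.signal import medfilt

def csi_features(csi_window):
    """csi_window: (time, subcarriers) array of complex CSI samples."""
    amp = np.abs(csi_window)                  # amplitude per subcarrier
    amp = medfilt(amp, kernel_size=(5, 1))    # suppress impulsive noise
    return np.concatenate([
        amp.mean(axis=0),                     # average level
        amp.std(axis=0),                      # movement-induced variation
        np.abs(np.diff(amp, axis=0)).mean(axis=0),  # temporal dynamics
    ])

# One feature vector per window feeds a standard classifier, e.g.
# sklearn.svm.SVC().fit(features, activity_labels).
window = np.random.default_rng(0).normal(size=(200, 30)) \
       + 1j * np.random.default_rng(1).normal(size=(200, 30))
print(csi_features(window).shape)             # (90,)
```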

    BodyScan: Enabling Radio-based Sensing on Wearable Devices for Contactless Activity and Vital Sign Monitoring

    Wearable devices are increasingly becoming mainstream consumer products carried by millions of consumers. However, the potential impact of these devices is currently constrained by fundamental limitations of their built-in sensors. In this paper, we introduce radio as a new, powerful sensing modality for wearable devices and propose to transform radio into a mobile sensor of human activities and vital signs. We present BodyScan, a wearable system that enables radio to act as a single modality capable of providing whole-body continuous sensing of the user. BodyScan overcomes key limitations of existing wearable devices by providing a contactless and privacy-preserving approach to capturing a rich variety of human activities and vital sign information. Our prototype design of BodyScan consists of two components, one worn on the hip and the other on the wrist, and is inspired by the increasingly prevalent scenario in which a user carries a smartphone while also wearing a wristband/smartwatch. The prototype can support daily usage with a single charge per day. Experimental results show that in controlled settings, BodyScan can recognize a diverse set of human activities while also estimating the user's breathing rate with high accuracy. Even in very challenging real-world settings, BodyScan can still infer activities with an average accuracy above 60% and monitor breathing rate for a reasonable portion of each day.
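    For the vital-sign side, the physical intuition is that breathing modulates the radio channel slowly and periodically, so the rate shows up as a spectral peak in the CSI amplitude stream. Below is a toy, hypothetical estimator of that idea (not BodyScan's algorithm; the sampling rate and breathing band are assumptions):

```python
# Toy breathing-rate estimator from a CSI amplitude stream (sketch only).
import numpy as np

def breathing_rate_bpm(csi_amp, fs):
    """csi_amp: 1-D CSI amplitude stream; fs: sampling rate in Hz."""
    x = csi_amp - csi_amp.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)    # plausible breathing band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                        # breaths per minute

fs = 20.0                                     # assumed 20 Hz CSI sampling
t = np.arange(0, 60, 1 / fs)
sim = 1 + 0.05 * np.sin(2 * np.pi * 0.25 * t) # simulated 15 breaths/min
print(breathing_rate_bpm(sim, fs))            # ~15.0
```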

    Unsupervised domain adaptation for robust sensory systems

    Despite significant advances in the performance of sensory inference models, their poor robustness to changing environmental conditions and hardware remains a major hurdle to widespread adoption. In this paper, we introduce the concept of unsupervised domain adaptation, a technique for adapting sensory inference models to new domains using only unlabeled data from the target domain. We present two case studies to motivate the problem and highlight some of our recent work in this space. Finally, we discuss the core challenges in this space that can trigger further ubicomp research on this topic.
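    As background for this and the following entries: adversarial unsupervised domain adaptation typically trains a task head on labeled source data while a domain discriminator pushes the feature encoder to make source and target features indistinguishable. The PyTorch sketch below shows that generic pattern with hypothetical model sizes; it is not code from any of these papers.

```python
# Generic adversarial uDA training step (a sketch, not any paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # shared features
task_head = nn.Linear(32, 2)                  # trained with source labels
discriminator = nn.Linear(32, 1)              # source vs. target features

opt_main = torch.optim.Adam(
    [*encoder.parameters(), *task_head.parameters()], lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def adaptation_step(x_src, y_src, x_tgt):
    # 1) discriminator learns to separate source and target features
    with torch.no_grad():
        f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    d_loss = bce(discriminator(f_src), torch.ones(len(x_src), 1)) + \
             bce(discriminator(f_tgt), torch.zeros(len(x_tgt), 1))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) encoder learns the source task AND to fool the discriminator,
    #    which aligns target features with source features
    task_loss = F.cross_entropy(task_head(encoder(x_src)), y_src)
    fool_loss = bce(discriminator(encoder(x_tgt)), torch.ones(len(x_tgt), 1))
    opt_main.zero_grad()
    (task_loss + fool_loss).backward()
    opt_main.step()

adaptation_step(torch.randn(32, 16),
                torch.randint(0, 2, (32,)),
                torch.randn(32, 16))
```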

    FlexAdapt: Flexible cycle-consistent adversarial domain adaptation

    Unsupervised domain adaptation is emerging as a powerful technique for improving the generalizability of deep learning models to new image domains without using any labeled data in the target domain. In the literature, solutions that perform cross-domain feature matching (e.g., ADDA), pixel matching (e.g., CycleGAN), and a combination of the two (e.g., CyCADA) have been proposed for unsupervised domain adaptation. Many of these approaches make a strong assumption that the source and target label spaces are the same; however, in the real world this assumption does not hold. In this paper, we propose a novel solution, FlexAdapt, which extends the state-of-the-art unsupervised domain adaptation approach of CyCADA to scenarios where the label spaces in the source and target domains only partially overlap. Our solution beats a number of state-of-the-art baseline approaches by as much as 29% in some scenarios, and represents a way forward for applying domain adaptation techniques in the real world.
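    To see why partial label overlap breaks naive adaptation: if the target contains only a subset of the source classes, aligning the two feature distributions drags target samples toward source-only classes. One common remedy, shown below purely as an illustration (not necessarily FlexAdapt's exact mechanism), is to estimate per-class weights from the model's own predictions on unlabeled target data and down-weight source-only classes in the source loss:

```python
# Class-weight estimation for partial label spaces (illustrative only).
import torch

def estimate_class_weights(model, target_loader, num_classes):
    """Average softmax mass per class over unlabeled target batches;
    classes absent from the target receive weights near zero."""
    totals = torch.zeros(num_classes)
    count = 0
    with torch.no_grad():
        for x in target_loader:
            totals += torch.softmax(model(x), dim=1).sum(dim=0)
            count += len(x)
    weights = totals / count
    return weights / weights.max()            # normalize to [0, 1]

# Toy usage with a stand-in model; the weights would then rescale the
# per-sample source classification loss, e.g.:
#   loss = (w[y_src] * F.cross_entropy(logits, y_src, reduction="none")).mean()
model = torch.nn.Linear(8, 5)
loader = [torch.randn(16, 8) for _ in range(4)]
print(estimate_class_weights(model, loader, num_classes=5))
```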

    FRuDA: Framework for Distributed Adversarial Domain Adaptation

    Breakthroughs in unsupervised domain adaptation (uDA) can help in adapting models from a label-rich source domain to unlabeled target domains. Despite these advancements, there is a lack of research on how uDA algorithms, particularly those based on adversarial learning, can work in distributed settings. In real-world applications, target domains are often distributed across thousands of devices, and existing adversarial uDA algorithms, which are centralized in nature, cannot be applied in these settings. To solve this important problem, we introduce FRuDA: an end-to-end framework for distributed adversarial uDA. Through a careful analysis of the uDA literature, we identify the design goals for a distributed uDA system and propose two novel algorithms to increase the adaptation accuracy and training efficiency of adversarial uDA in distributed settings. Our evaluation of FRuDA with five image and speech datasets shows that it can boost target domain accuracy by up to 50% and improve the training efficiency of adversarial uDA by at least 11 times.
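    One design question such a distributed setting raises is which existing, already-adapted domain a newly joining device should adapt from. A simple, hypothetical proxy, used here only to make the problem concrete (FRuDA's actual algorithms are not reproduced here), is to pick the candidate whose feature distribution lies closest to the new target's:

```python
# Sketch of collaborator selection in distributed uDA: a new target
# picks the existing domain closest to it in feature space. The distance
# proxy (mean-embedding distance) is a simplification for illustration.
import numpy as np

def domain_distance(feats_a, feats_b):
    """Distance between the mean feature embeddings of two domains."""
    return float(np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0)))

def pick_collaborator(new_target_feats, candidates):
    """candidates: {domain_name: (n_i, d) feature matrix}."""
    return min(candidates,
               key=lambda name: domain_distance(new_target_feats,
                                                candidates[name]))

rng = np.random.default_rng(1)
candidates = {"source": rng.normal(0.0, 1, (100, 8)),
              "device_A": rng.normal(0.5, 1, (100, 8))}
new_target = rng.normal(0.6, 1, (50, 8))
print(pick_collaborator(new_target, candidates))  # likely "device_A"
```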

    DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices

    Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract the high-level information needed by mobile apps. It is critical that the gains in inference accuracy that deep models afford become embedded in future generations of mobile apps. In this work, we present the design and implementation of DeepX, a software accelerator for deep learning execution. DeepX significantly lowers the device resources (viz. memory, computation, energy) required by deep learning that currently act as a severe bottleneck to mobile adoption. The foundation of DeepX is a pair of resource control algorithms, designed for the inference stage of deep learning, that (1) decompose monolithic deep model network architectures into unit-blocks of various types, which are then more efficiently executed by heterogeneous local device processors (e.g., GPUs, CPUs); and (2) perform principled resource scaling that adjusts the architecture of deep models to shape the overhead each unit-block introduces. Experiments show that DeepX can allow even large-scale deep learning models to execute efficiently on modern mobile processors and significantly outperform existing solutions, such as cloud-based offloading.
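    A concrete instance of resource scaling of this kind is low-rank factorization of a layer's weight matrix, trading a little accuracy for much less memory and compute. The SVD sketch below illustrates the principle only; it is not DeepX's implementation, which also partitions execution across heterogeneous processors:

```python
# Low-rank (SVD) compression of one fully-connected layer (illustration).
import numpy as np

def compress_layer(W, rank):
    """Replace an (m, n) weight matrix with two thin factors, W ~= A @ B."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]                # (m, rank)
    B = Vt[:rank, :]                          # (rank, n)
    return A, B

W = np.random.default_rng(0).normal(size=(512, 512))
A, B = compress_layer(W, rank=64)
print(f"params: {W.size} -> {A.size + B.size} "
      f"({(A.size + B.size) / W.size:.0%})")
x = np.random.default_rng(1).normal(size=512)
print("relative error:",
      np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x))
```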

    A Vision for Adaptive and Generalizable Audio-Sensing Systems

    We present our vision and key research directions for next-generation audio- and speech-sensing systems, with the aim of making them robust to variability in sensing hardware and operating conditions.

    Mapping vicon motion tracking to 6-axis IMU data for wearable activity recognition

    No description supplied.